✨(schema): Add YAML schema deparser for LLM context optimization #3751
Conversation
Unexpected error on running flow
Updates to Preview Branch (yaml-parser) ↗︎

Tasks are run on every commit, but only new migration files are pushed. View logs for this Workflow Run ↗︎.
Walkthrough

Replaces convertSchemaToText with yamlSchemaDeparser and neverthrow Result handling across agent nodes; removes the old text utility; adds a YAML schema deparser (with tests) and Result-returning deparser type changes plus legacy types; updates PostgreSQL deparsers to legacy types and exports yamlSchemaDeparser.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Node as Workflow Node
  participant Cfg as getConfigurable(config)
  participant Yaml as yamlSchemaDeparser(schemaData)
  participant Comb as Result.combine
  participant Agent as Domain Agent
  Node->>Cfg: build configurable (Result)
  Node->>Yaml: deparse schema -> Result
  Node->>Comb: combine([configurable, schemaYAML])
  alt combine is Err
    Comb-->>Node: Err
    Node-->>Node: throw WorkflowTerminationError
  else combine is Ok
    Comb-->>Node: Ok([configurable, schemaText])
    Node->>Agent: invoke with config + schemaText
    Agent-->>Node: Result (success / error)
  end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Check changeset necessity

Status: REQUIRED
Reason:

Changeset (copy & paste):

```
---
"@liam-hq/schema": minor
---

- ✨ Add YAML schema deparser optimized for LLM context
- Introduce `yamlSchemaDeparser(schema) -> Result<string, Error>` that serializes Schema to compact YAML (removes nulls)
- Export new neverthrow-based deparser types: `SchemaDeparser`, `OperationDeparser`
- Keep backward-compatible aliases as `LegacySchemaDeparser`, `LegacyOperationDeparser`
- Add `yaml` dependency and export `yamlSchemaDeparser` from package entrypoint
```
Actionable comments posted: 0
🧹 Nitpick comments (1)
frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts (1)
7-15: Filter null entries out of arrays as well

The current helper drops null object properties but preserves null items inside arrays, so any `Array<... | null>` fields still leak `null` into the YAML and undercut the "remove null values" goal. Please filter them out before mapping.

```diff
-  if (Array.isArray(obj)) {
-    return obj.map(removeNullValues)
-  }
+  if (Array.isArray(obj)) {
+    return obj.filter((item) => item !== null).map(removeNullValues)
+  }
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (13)
- `frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts` (2 hunks)
- `frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts` (2 hunks)
- `frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts` (2 hunks)
- `frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts` (2 hunks)
- `frontend/internal-packages/agent/src/utils/convertSchemaToText.ts` (0 hunks)
- `frontend/packages/schema/package.json` (1 hunks)
- `frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts` (2 hunks)
- `frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts` (2 hunks)
- `frontend/packages/schema/src/deparser/type.ts` (1 hunks)
- `frontend/packages/schema/src/deparser/yaml/index.ts` (1 hunks)
- `frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts` (1 hunks)
- `frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts` (1 hunks)
- `frontend/packages/schema/src/index.ts` (1 hunks)
💤 Files with no reviewable changes (1)
- frontend/internal-packages/agent/src/utils/convertSchemaToText.ts
🧰 Additional context used
📓 Path-based instructions (5)
frontend/packages/**
📄 CodeRabbit inference engine (AGENTS.md)
Shared libraries and tools live under frontend/packages
Files:
- `frontend/packages/schema/package.json`
- `frontend/packages/schema/src/deparser/yaml/index.ts`
- `frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts`
- `frontend/packages/schema/src/index.ts`
- `frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts`
- `frontend/packages/schema/src/deparser/type.ts`
- `frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts`
- `frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts`
**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Name utility files in camelCase (e.g., mergeSchema.ts)
Files:
- `frontend/packages/schema/src/deparser/yaml/index.ts`
- `frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts`
- `frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts`
- `frontend/packages/schema/src/index.ts`
- `frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts`
- `frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts`
- `frontend/packages/schema/src/deparser/type.ts`
- `frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts`
- `frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts`
- `frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts`
- `frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts`
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use TypeScript/TSX across the codebase
**/*.{ts,tsx}: Use runtime type validation with valibot for external data validation
Prefer early returns for readability
Write simple, direct code without backward compatibility shims; update all call sites together
Use const-assigned arrow functions instead of function declarations for small utilities (e.g., const toggle = () => {})
Follow existing import patterns and tsconfig path aliases
Files:
- `frontend/packages/schema/src/deparser/yaml/index.ts`
- `frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts`
- `frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts`
- `frontend/packages/schema/src/index.ts`
- `frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts`
- `frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts`
- `frontend/packages/schema/src/deparser/type.ts`
- `frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts`
- `frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts`
- `frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts`
- `frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts`
frontend/internal-packages/**
📄 CodeRabbit inference engine (AGENTS.md)
Infra and tooling (e2e, configs, storybook, agent) live under frontend/internal-packages
Files:
- `frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts`
- `frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts`
- `frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts`
- `frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts`
**/*.test.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Write unit tests with filenames ending in .test.ts or .test.tsx colocated near source
Files:
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
🧠 Learnings (2)
📚 Learning: 2025-06-26T05:50:27.762Z
Learnt from: MH4GF
PR: liam-hq/liam#2225
File: frontend/packages/db-structure/src/deparser/postgresql/schemaDeparser.ts:14-14
Timestamp: 2025-06-26T05:50:27.762Z
Learning: In the PostgreSQL schema deparser (frontend/packages/db-structure/src/deparser/postgresql/schemaDeparser.ts), type assertions like `as Table[]` and `as Index[]` are acceptable when used with Object.values() because the schema structure is guaranteed to be correct upstream, making the assertions safe.
Applied to files:
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts
📚 Learning: 2025-09-04T09:48:12.223Z
Learnt from: FunamaYukina
PR: liam-hq/liam#3331
File: frontend/internal-packages/agent/src/workflow/nodes/validateInitialSchemaNode.ts:94-103
Timestamp: 2025-09-04T09:48:12.223Z
Learning: The postgresqlSchemaDeparser function from liam-hq/schema always returns a string type in the value property (even empty strings for empty schemas), and executeQuery properly handles empty strings. Therefore, using !ddlStatements is sufficient to check for empty/invalid DDL without needing to explicitly check for empty arrays.
Applied to files:
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts
🧬 Code graph analysis (7)
frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts (2)
- frontend/packages/schema/src/index.ts (2)
  - `postgresqlOperationDeparser` (2-2)
  - `LegacyOperationDeparser` (6-6)
- frontend/packages/schema/src/deparser/type.ts (1)
  - `LegacyOperationDeparser` (25-27)

frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts (2)
- frontend/internal-packages/agent/src/utils/getConfigurable.ts (1)
  - `getConfigurable` (10-31)
- frontend/internal-packages/agent/src/utils/errorHandling.ts (1)
  - `WorkflowTerminationError` (5-15)

frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts (1)
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts (1)
  - `yamlSchemaDeparser` (20-35)

frontend/packages/schema/src/deparser/type.ts (1)
- frontend/packages/schema/src/index.ts (6)
  - `LegacySchemaDeparser` (7-7)
  - `Schema` (62-62)
  - `LegacyOperationDeparser` (6-6)
  - `Operation` (38-38)
  - `SchemaDeparser` (9-9)
  - `OperationDeparser` (8-8)

frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts (3)
- frontend/internal-packages/agent/src/utils/getConfigurable.ts (1)
  - `getConfigurable` (10-31)
- frontend/internal-packages/agent/src/utils/errorHandling.ts (1)
  - `WorkflowTerminationError` (5-15)
- frontend/internal-packages/agent/src/pm-agent/invokePmAnalysisAgent.ts (1)
  - `invokePmAnalysisAgent` (44-83)

frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts (1)
- frontend/packages/schema/src/deparser/type.ts (1)
  - `LegacySchemaDeparser` (19-19)

frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts (2)
- frontend/packages/schema/src/deparser/yaml/index.ts (1)
  - `yamlSchemaDeparser` (1-1)
- frontend/internal-packages/neverthrow/src/index.ts (1)
  - `fromThrowable` (11-16)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: frontend-lint
- GitHub Check: frontend-ci
- GitHub Check: Supabase Preview
- GitHub Check: security-review
- GitHub Check: agent-deep-modeling
🔇 Additional comments (7)
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts (1)

17-24: LGTM! Clear deprecation path for legacy type.

The deprecation notice clearly indicates the planned migration to the new Result-based type system. The type change from `SchemaDeparser` to `LegacySchemaDeparser` is consistent with the broader deparser type refactoring.

frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts (1)

39-43: LGTM! Error handling is appropriate for this context.

The Result-based error handling pattern is correctly implemented. The error is thrown directly, which is suitable for this test case generation subgraph context.

frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts (1)

50-54: LGTM! Consistent error handling within QA agent.

The implementation follows the same error handling pattern as `generateTestcaseNode.ts`, which is appropriate for the QA agent's test case generation flow.

frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts (1)

18-26: LGTM! Elegant use of Result.combine for multiple operations.

The combined error handling pattern using `Result.combine` cleanly coordinates both configuration retrieval and schema deparsing. The destructuring of `repositories` and `schemaText` is clear and type-safe.

frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts (1)

17-29: LGTM! Consistent Result.combine pattern.

The implementation follows the same Result.combine pattern as `designSchemaNode.ts`, appropriately wrapping errors in `WorkflowTerminationError` for the workflow context.

frontend/packages/schema/src/index.ts (1)

5-11: LGTM! Public API expanded appropriately.

The exports correctly include both legacy types (for backward compatibility during migration) and the new `yamlSchemaDeparser`, maintaining a clear migration path without breaking existing code.

frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts (1)

608-615: LGTM! Consistent deprecation strategy.

The deprecation notice and type migration from `OperationDeparser` to `LegacyOperationDeparser` follows the same pattern as `schemaDeparser.ts`, maintaining consistency across the deparser module.
🤖 Agent Deep Modeling Execution

Started at: 2025-10-14 04:01:30 UTC

Command Output
RUN v3.2.4 /home/runner/work/liam/liam/frontend/internal-packages/agent (node:7704) ExperimentalWarning: WASI is an experimental feature and might change at any time ✅ [INFO] 2025-10-14T04:01:33.633Z 🧠 Thinking (pm)... Defining user management system requirements I'm focusing on outlining the current schema structure, which will be crucial for the database agent to design based on our requirements. The business requirements document (BRD) needs to be clear yet thorough. I’m breaking down key components like the goal of a simple user management system, identifying stakeholders like End Users and Admins, and detailing use cases, including user registration options, admin-created users, authentication processes, password management, profile management, and role management. I’m also considering how to handle user activation and deletion, including soft-deletion with audit trails. Detailing user management system components I'm outlining more details for the user management system, including data entities and attributes like User, Role, and associated permissions. For instance, the User entity might have various fields like username, email, password, and status indicators. I’m also considering audit logs for accountability. Moreover, I must incorporate business rules like unique usernames/emails, password policies, and necessary user status checks, such as requiring email verification for login. Performance and security are also critical, as well as basic compliance measures like GDPR. I wonder if I should define API endpoints, but I think it may not be necessary right now. Planning test case categories I’m organizing my test cases into several categories like Users, Authentication, Roles & Permissions, and more. For each category, I’ll include tests for operations like insert, update, delete, and select, ensuring we cover both positive and negative scenarios. I need to label each test case with a 'type' to clarify whether it’s an INSERT, UPDATE, DELETE, or SELECT. 
It's important that these categories appear as keys in the test case object, and I’ll provide clear titles. I’ll also begin with a concise checklist of tasks to keep it conceptual, rather than diving into implementation details right away. Structuring requirements and tests I’m planning out a clear process that involves defining our goals, scope, and roles first. After that, I’ll translate these into business and functional requirements, ensuring I document any assumptions. Next, I'll specify the necessary data entities and any governing rules. Non-functional requirements and constraints will be outlined too. I’ll categorize acceptance test scenarios to make them useful. Before any significant tool call, I need to state the purpose and minimal inputs concisely, like "Purpose: Save analyzed requirements; Inputs: goal summary and categorized test cases." Each test case must be actionable for QA too! Creating structured test cases I realize that deactivation is an UPDATE action while preventing login for deactivated accounts falls under SELECT. The tool allows one type per test case, so I can define separate tests for these scenarios. I’ll ensure negative tests for password policy and edge cases like truncation and case-insensitive search are included. Next, I’ll form categories like User Registration, Authentication, Email Verification, User Profile Management, Roles & Permissions, User Status, and Auditing, ensuring each test case is clear with a title and type. I’m focusing on a concise set of around 30 test cases to maintain clarity and coverage, and before every tool call, I’ll include a purpose statement with inputs. ⏺ (pm): Checklist
Business Requirements Document (BRD): Simple User Management System
5.2 Authentication and Credentials
5.3 Roles and Permissions
5.4 Search, Filter, Sort, Pagination
5.5 Auditing
Purpose: Save analyzed requirements artifact; Inputs: goal summary and categorized testcases. 🧠 Thinking (pm)... Structuring Test Cases I need to structure the requirements into test cases for the Business Requirement Document (BRD). It looks like we’re focusing on major categories like User Registration, Authentication, Role Management, and more. Each test case must include types like INSERT, UPDATE, DELETE, and SELECT, covering both positive and negative scenarios. Since the current schema is empty, I’ll craft titles that reflect the business scenarios and their corresponding SQL operations. This is shaping up to be quite interesting! Defining Test Cases for SQL Operations I need to include test cases that verify unique constraints to prevent duplicates, like rejecting duplicate emails during user creation with INSERT. For authentication, I'll create test cases using SELECT to fetch verified users and UPDATE to change the last_login_at. Email verification will involve updating the user's status from pending to active, while token consumption will be another UPDATE, and creation will be an INSERT. For role management, I'll ensure correct mappings and include DELETE for revocation, while profile updates and status management will also use UPDATE. I need to think through these cases thoroughly! Clarifying SQL Operations I need to ensure I’m focusing on SQL operation types as requested. For login, the test case for rejecting logins for inactive users will be a SELECT. In terms of soft deletes, I'll perform an UPDATE to set deleted_at and status to deleted, with the negative test ensuring they aren’t included in lists. I also want to verify unique constraints still hold. For search and list functionalities, I’ll consider SELECT with filters, pagination, and sorting. For validation, I can craft negative scenarios around weak passwords. 
Lastly, I see account lockout needs additional fields like failed_login_count and locked_until in our entities to support this feature and avoid QA issues. Updating Data Entities and Finalizing Test Cases To enhance our data entities, I’ll add fields for failed_login_attempts (int) and locked_until (nullable timestamp) to the User entity as noted in the BRD. We also included separate tables for email verification and password reset tokens, which is good. Now, I'm finalizing the test case categories, which will include User Registration and Verification, Authentication and Lockout, Role Management, and more. I aim for 3-6 concise test cases per category to ensure comprehensive acceptance scenarios. Each test case will need a clear title and type. Let's get composing! Finalizing Requirements and Test Cases After building the test cases, I'll use the saveRequirementsToArtifactTool with a summary describing the changes, confirming success afterward. I also need to modify the BRD to add fields for lockout, indices, and constraints for audit immutability. I'll outline the final requirements with the checklist, ensuring the "Acceptance Test Scenarios" section references the artifact instead of listing everything. Each test case should systematically cover specific actions and SQL operation types, categorized clearly, aiming for conciseness yet comprehensive coverage. Let’s get started! Finalizing Test Cases and Requirements Overview I need to confirm that all test cases contain only the "title" and "type" fields. When using the "Save Requirements" tool, I need to format it like this: { Before the tool call, I should state the purpose along with minimal inputs: "Purpose: Save analyzed requirements artifact; Inputs: goal summary and categorized test cases." After the tool runs, I’ll confirm with "Requirements saved successfully." Additionally, I’ll mention the need for indexes in the BRD for optimal DB design. Now, I should compose the final message accordingly! 
⏺ (pm): Purpose: Save analyzed requirements artifact; Inputs: goal summary and categorized testcases. 🔧 saveRequirementsToArtifactTool() ⎿ unknown: Requirements saved successfully to artifact 🧠 Thinking (db)... Designing a schema plan I need to create a planning checklist with around 3-7 bullet points and clearly state my purpose before making the tool call with minimal inputs. My next step is to call the schema design tool with JSON operations to develop tables, enums, constraints, and more. I want to ensure the design includes user-related elements, tracking failed login attempts, check constraints for status enums, and unique role names. I need to consider the limitations on using foreign keys for deleting users and possibly explore check constraints. Implementing check constraints I'm considering using the CHECK constraint type with details for clearer enforcement. For example, I want to add check constraints for email and phone formats, as well as potentially validating the password hash length to ensure it's at least 60 characters. However, enforcing password strength at the database level is tricky for hashed passwords, so I’ll state that enforcement occurs at the application level. Once I've set up the tables with columns and constraints, I’ll make the tool call and include validation lines as per the guidelines, followed by confirmation. 
🔧 schemaDesignTool()

[Interleaved parallel QA agent output elided: multiple concurrent `saveTestcase()` tool calls emitted SQL for user registration, duplicate-email rejection, last-login timestamp updates, and password-reset token consumption, but the streams are garbled beyond reconstruction.]

❌ Test Case: Successful self-registration creates pending user and verification token — Error:
Pull Request Overview
This PR adds a YAML schema deparser to optimize LLM context usage by converting Schema objects to compact YAML format. The new implementation removes null values to minimize token usage while providing better structured output than the previous text-based approach.
- Adds `yamlSchemaDeparser` function using neverthrow error handling patterns
- Migrates all agent nodes from `convertSchemaToText` to `yamlSchemaDeparser`
- Updates type definitions with legacy annotations for backward compatibility
Reviewed Changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| frontend/packages/schema/src/index.ts | Exports new YAML deparser and updated type definitions |
| frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts | Implements YAML schema deparser with null value removal |
| frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts | Comprehensive test suite for YAML deparser functionality |
| frontend/packages/schema/src/deparser/yaml/index.ts | Export barrel for YAML deparser |
| frontend/packages/schema/src/deparser/type.ts | Updates type definitions with legacy support and neverthrow integration |
| frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts | Marks PostgreSQL deparser as legacy with deprecation notice |
| frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts | Marks PostgreSQL operation deparser as legacy with deprecation notice |
| frontend/packages/schema/package.json | Adds yaml dependency |
| frontend/internal-packages/agent/src/utils/convertSchemaToText.ts | Removes deprecated schema-to-text conversion utility |
| frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts | Migrates to yamlSchemaDeparser with error handling |
| frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts | Migrates to yamlSchemaDeparser with error handling |
| frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts | Migrates to yamlSchemaDeparser using Result.combine |
| frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts | Migrates to yamlSchemaDeparser using Result.combine |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
LGTM👍
…d compatibility

- Add new SchemaDeparser and OperationDeparser types returning Result<string, Error>
- Mark existing implementations as LegacySchemaDeparser and LegacyOperationDeparser
- Add @deprecated decorators and TODO comments for future migration
- Maintain backward compatibility with existing PostgreSQL deparsers

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>

Add yamlSchemaDeparser to convert Schema objects to compact YAML format. This is optimized for LLM context by removing null values to reduce token usage.

- Implement yamlSchemaDeparser with neverthrow error handling
- Add null value removal to minimize output size
- Add comprehensive tests using _unsafeUnwrap for test assertions
- Use yaml package for reliable YAML serialization

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>

Replace convertSchemaToText with yamlSchemaDeparser for better LLM context optimization through compact YAML format.

- Update all agent nodes to use yamlSchemaDeparser
- Remove deprecated convertSchemaToText utility
- Fix import order for biome lint compliance

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
Actionable comments posted: 0
🧹 Nitpick comments (5)
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts (2)
14-464: Great coverage; add explicit null-removal test and stabilize snapshots.
- Add a case with a nullable field (e.g., comment: null) and assert it’s omitted to prove the “remove null values” behavior.
- Consider enabling deterministic key ordering in the YAML stringifier (sortMapEntries) to reduce snapshot churn. In schemaDeparser.ts (lines 19-34 from context), set sortMapEntries: true.
Example (in schemaDeparser.ts):
```diff
  yaml.stringify(cleanedSchema, {
    defaultStringType: 'PLAIN',
    defaultKeyType: 'PLAIN',
    lineWidth: 0,
    minContentWidth: 0,
+   sortMapEntries: true,
  }),
```
342-353: Optional: omit empty sections for further token savings.

Currently empty maps like tables: {}, enums: {}, extensions: {} are persisted. Consider an option to drop empty sections to reduce tokens (keep default behavior unchanged).
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts (1)
39-44: Wrap deparser errors in WorkflowTerminationError for consistency.

Other nodes standardize on WorkflowTerminationError for observability and retries. Do the same here.
Apply:
```diff
+import { WorkflowTerminationError } from '../../utils/errorHandling'
@@
-  if (schemaContextResult.isErr()) {
-    throw schemaContextResult.error
-  }
+  if (schemaContextResult.isErr()) {
+    throw new WorkflowTerminationError(
+      schemaContextResult.error,
+      'generateTestcaseNode',
+    )
+  }
```

frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts (1)
616-807: High cyclomatic complexity; consolidate branch/Result handling.

Consider a table-driven dispatch or a helper to unwrap Result and build the legacy shape to remove repeated error/value boilerplate.
Example helper:
```ts
const toLegacy = <T>(r: Result<T, Error>, errors: { message: string }[]) =>
  r.isErr()
    ? { value: '', errors: [{ message: r.error.message }] }
    : { value: r.value, errors }
```

Then each branch can simply return toLegacy(generateX(operation), errors).
frontend/packages/schema/src/deparser/type.ts (1)
10-13: Export LegacyDeparserResult (and optionally DeparserError) for consumers.

If external users need to type the legacy return shape, export these.
```diff
-type DeparserError = {
+export type DeparserError = {
   message: string
 }

-type LegacyDeparserResult = {
+export type LegacyDeparserResult = {
   value: string
   errors: DeparserError[]
 }
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (13)
- frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts (2 hunks)
- frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts (2 hunks)
- frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts (2 hunks)
- frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts (2 hunks)
- frontend/internal-packages/agent/src/utils/convertSchemaToText.ts (0 hunks)
- frontend/packages/schema/package.json (1 hunks)
- frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts (2 hunks)
- frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts (2 hunks)
- frontend/packages/schema/src/deparser/type.ts (1 hunks)
- frontend/packages/schema/src/deparser/yaml/index.ts (1 hunks)
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts (1 hunks)
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts (1 hunks)
- frontend/packages/schema/src/index.ts (1 hunks)
💤 Files with no reviewable changes (1)
- frontend/internal-packages/agent/src/utils/convertSchemaToText.ts
🚧 Files skipped from review as they are similar to previous changes (7)
- frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts
- frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts
- frontend/packages/schema/src/index.ts
- frontend/packages/schema/package.json
- frontend/packages/schema/src/deparser/yaml/index.ts
- frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts
🧰 Additional context used
📓 Path-based instructions (7)
**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Name utility files in camelCase (e.g., mergeSchema.ts)
Files:
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
- frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts
- frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts
- frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts
- frontend/packages/schema/src/deparser/type.ts
**/*.test.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Write unit tests with filenames ending in .test.ts or .test.tsx colocated near source
Files:
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use TypeScript/TSX across the codebase
Files:
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
- frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts
- frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts
- frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts
- frontend/packages/schema/src/deparser/type.ts
frontend/packages/**
📄 CodeRabbit inference engine (AGENTS.md)
Shared libraries and tools live under frontend/packages
Files:
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
- frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts
- frontend/packages/schema/src/deparser/type.ts
frontend/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
frontend/**/*.{ts,tsx}: Use named exports only (no default exports)
Import UI components from @liam-hq/ui when available
Import icons from @liam-hq/ui
Follow existing import patterns and tsconfig path aliases
Use consts instead of function declarations for simple utilities (e.g., const toggle = () => {})
Use runtime type validation with valibot for external data validation
Use early returns for readability
Files:
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
- frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts
- frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts
- frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts
- frontend/packages/schema/src/deparser/type.ts
frontend/**/*.{test,spec}.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Follow principles outlined in @docs/test-principles.md for tests
Files:
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
frontend/internal-packages/**
📄 CodeRabbit inference engine (AGENTS.md)
Infra and tooling (e2e, configs, storybook, agent) live under frontend/internal-packages
Files:
- frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts
- frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts
🧬 Code graph analysis (4)
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts (1)
frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts (1)
yamlSchemaDeparser(20-35)
frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts (2)
frontend/packages/schema/src/index.ts (2)
- postgresqlOperationDeparser(2-2)
- LegacyOperationDeparser(6-6)

frontend/packages/schema/src/deparser/type.ts (1)
LegacyOperationDeparser(25-27)
frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts (3)
frontend/internal-packages/agent/src/utils/getConfigurable.ts (1)
- getConfigurable(10-31)

frontend/internal-packages/agent/src/utils/errorHandling.ts (1)
- WorkflowTerminationError(5-15)

frontend/internal-packages/agent/src/pm-agent/invokePmAnalysisAgent.ts (1)
invokePmAnalysisAgent(44-83)
frontend/packages/schema/src/deparser/type.ts (1)
frontend/packages/schema/src/index.ts (6)
- LegacySchemaDeparser(7-7)
- Schema(62-62)
- LegacyOperationDeparser(6-6)
- Operation(38-38)
- SchemaDeparser(9-9)
- OperationDeparser(8-8)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Supabase Preview
- GitHub Check: Supabase Preview
- GitHub Check: frontend-lint
- GitHub Check: frontend-ci
- GitHub Check: agent-deep-modeling
- GitHub Check: Supabase Preview
🔇 Additional comments (4)
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts (1)
8-8: LGTM: migrated to yamlSchemaDeparser.

Import looks correct and aligns with the new deparser API.
frontend/packages/schema/src/deparser/type.ts (1)
29-31: Neverthrow-based types: LGTM.

Updated SchemaDeparser/OperationDeparser signatures look good.
frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts (1)
17-35: Approve migration completion: no stale convertSchemaToText or OperationDeparser references remain.

frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts (1)
56-56: No consumers of OperationDeparser detected
Scoped search found no references to `OperationDeparser`; `postgresqlOperationDeparser` correctly remains `LegacyOperationDeparser`.
Issue
Why is this change needed?
This PR adds a new YAML schema deparser optimized for LLM context usage. The deparser converts Schema objects to compact YAML format by removing null values, significantly reducing token usage when passing schema information to LLMs.
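The core trick, stripping null values before serialization, can be sketched as follows. This is a minimal illustration, not the package's implementation: the real yamlSchemaDeparser uses the yaml package and neverthrow, while this sketch shows only the null-removal pass on a hypothetical schema.

```typescript
// Recursively drop null-valued entries so they never reach the YAML output.
// Simplified sketch: handles arrays and plain objects only, no cycle handling.
const removeNulls = (value: unknown): unknown => {
  if (Array.isArray(value)) {
    return value.filter((v) => v !== null).map(removeNulls)
  }
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([, v]) => v !== null)
        .map(([k, v]) => [k, removeNulls(v)]),
    )
  }
  return value
}

// Hypothetical schema fragment with null fields that would waste tokens.
const schema = {
  tables: {
    users: {
      comment: null, // dropped
      columns: { id: { type: 'uuid', default: null } }, // default dropped
    },
  },
}

const cleaned = removeNulls(schema)
```

The cleaned object is then handed to the YAML stringifier, so `comment: null` and `default: null` never appear in the LLM prompt.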
Changes Made
Schema Package (@liam-hq/schema):

- Add a yamlSchemaDeparser function that converts Schema objects to YAML format
- Add comprehensive tests using _unsafeUnwrap for test assertions
- Use the yaml package for reliable YAML serialization
- Wrap serialization errors with fromThrowable

Agent Package (@liam-hq/agent):

- Update all agent nodes to use yamlSchemaDeparser instead of convertSchemaToText
- Remove the deprecated convertSchemaToText utility

Benefits
Example Output
Before (convertSchemaToText):
After (yamlSchemaDeparser):
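For a hypothetical users table, the compact YAML form looks roughly like this (null-valued fields are omitted rather than serialized as null):

```yaml
tables:
  users:
    columns:
      id:
        type: uuid
        notNull: true
      email:
        type: text
enums: {}
extensions: {}
```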
Summary by CodeRabbit